Procedure and apparatus for intra-prediction on screen
Abstract:
Procedure and apparatus for intra-prediction on the screen. The present invention relates to a method and apparatus for intra-prediction. The intra-prediction method for a decoder, according to the present invention, comprises the steps of entropy-decoding a received bitstream; generating reference pixels to be used in the intra-prediction of a prediction unit; generating a prediction block from the reference pixels, based on a prediction mode for the prediction unit; and reconstructing an image from the prediction block and a residual block obtained as a result of the entropy decoding, wherein the reference pixels and/or the pixels of the prediction block are predicted based on a base pixel, and the predicted pixel value can be the sum of the pixel value of the base pixel and the difference between the pixel values of the base pixel and the generated pixel.

Publication number: ES2612388A1
Application number: ES201631081
Filing date: 2012-05-14
Publication date: 2017-05-16
Inventors: Jae Cheol Kwon; Joo Young Kim
Applicant: KT Corp
DESCRIPTION

Procedure and apparatus for intra-prediction on screen

Technical field

The present invention relates to a video processing technique and, more specifically, to an intra-prediction procedure used in the encoding and decoding of video information.

Background

Recently, demands for high-resolution and high-quality images have increased in several fields of application. As images gain higher resolution and higher quality, the amount of information in them also increases. Consequently, when video data are transferred using media such as existing broadband wired and wireless lines, or stored in conventional storage media, the transfer and storage costs of the video data increase. Therefore, high-efficiency video compression techniques can be used to effectively transmit, store or reproduce images with superior resolution and quality.

Disclosure

Technical problem

An aspect of the present invention is to provide a procedure for effectively performing intra-prediction on a texture with directionality, in consideration of the variations of the reference pixels of the neighboring blocks.

Another aspect of the present invention is to provide a method of performing planar prediction that takes into account the variations in the pixel values of the blocks adjacent to a prediction block when performing intra-prediction.

Another aspect of the present invention is to provide a method of generating a reference pixel, on the basis of a neighboring block coded in intra mode, at the position of a neighboring pixel coded in inter-prediction mode, and of using that reference pixel for intra-prediction when constrained intra prediction (CIP) is used.

Another aspect of the present invention is to provide a method of generating a reference pixel that takes variations in the pixel value into account when the reference pixel is generated on the basis of a neighboring block coded in intra mode, at the position of a neighboring pixel coded in inter-prediction mode.

Technical solution

An embodiment of the present invention provides an intra-prediction method for an encoder, the method including: generating reference pixels for intra-prediction with respect to an input prediction unit; determining an intra mode for the prediction unit; generating a prediction block based on the reference pixels and the intra mode; and generating a residual block from the difference between the prediction unit and the prediction block, wherein at least one of the reference pixels and the pixels of the prediction block is predicted on the basis of a base pixel, and a pixel value of the predicted pixel is the sum of the pixel value of the base pixel and a variation in pixel value between the base pixel and the generated pixel.
A reference pixel of a neighboring block, arranged at the upper left corner of the prediction block, may be set as a first base pixel; a value obtained by applying, to the first base pixel, a variation in pixel value between the first base pixel and the lowest pixel among the reference pixels of a neighboring block arranged at the left boundary of the prediction block, and a variation in pixel value between the first base pixel and the rightmost pixel among the reference pixels of a neighboring block arranged at the upper boundary of the prediction block, may be set as the pixel value of a second base pixel, namely a diagonal pixel at the lower right corner of the prediction block; and the pixel values of the diagonal pixels of the prediction block may be predicted from the first base pixel and the second base pixel.

Here, the non-diagonal pixels of the prediction block are predicted by interpolation or extrapolation, using the diagonal pixels and the pixels of the neighboring blocks at the upper and/or left boundaries of the prediction block.

In addition, a reference pixel of a neighboring block, arranged at the upper left corner of the prediction block, may be set as the base pixel, and a value obtained by applying, to the base pixel, a variation in pixel value between the base pixel and a neighboring pixel arranged in the same row as the prediction target pixel, among the reference pixels of a neighboring block arranged at the left boundary of the prediction block, and a variation in pixel value between the base pixel and a neighboring pixel arranged in the same column as the prediction target pixel, among the reference pixels of a neighboring block arranged at the upper boundary of the prediction block, may be predicted as the pixel value of the prediction target pixel.

In addition, a pixel arranged in the same row or column as the prediction target pixel, among the pixels of the neighboring blocks arranged at the left or upper boundary of the prediction block, may be set as the base pixel, and a value obtained by applying, to the base pixel, the variation in pixel value between the base pixel and the prediction target pixel may be predicted as the pixel value of the prediction target pixel.

Here, the prediction target pixel may be a diagonal pixel of the prediction block, and a non-diagonal pixel of the prediction block may be predicted by interpolation, using the diagonal pixel and the pixels of the neighboring blocks.

The intra-prediction procedure may further include generating a reference pixel arranged on a boundary between an inter-mode block and the prediction unit when a block neighboring the prediction unit is an inter-mode block, wherein a pixel arranged on the boundary of the prediction unit, among the pixels of an intra-mode block arranged on the left side or the lower side of the reference pixel, may be set as a first base pixel; a pixel arranged on the boundary of the prediction unit, among the pixels of an intra-mode block arranged on the right side or the upper side of the reference pixel, may be set as a second base pixel; and the reference pixel may be generated based on a distance from the first base pixel to the reference pixel and a distance from the second base pixel to the reference pixel.
Here, the pixel value of the first base pixel may be an average pixel value of the pixels arranged on the boundary of the prediction unit, among the pixels of the intra-mode block to which the first base pixel belongs, and the pixel value of the second base pixel may be an average pixel value of the pixels arranged on the boundary of the prediction unit, among the pixels of the intra-mode block to which the second base pixel belongs.

In addition, the pixel value of the first base pixel may be the pixel value of the reference pixel when an intra-mode block is arranged only on the left side or the lower side of the reference pixel, and the pixel value of the second base pixel may be the pixel value of the reference pixel when an intra-mode block is arranged only on the right side or the upper side of the reference pixel.

Another embodiment of the present invention provides an intra-prediction method for a decoder, the method including: entropy-decoding a received bitstream; generating a reference pixel used for the intra-prediction of a prediction unit; generating a prediction block from the reference pixel on the basis of a prediction mode for the prediction unit; and reconstructing an image from the prediction block and a residual block obtained by the entropy decoding, wherein at least one of the reference pixels and the pixels of the prediction block is predicted on the basis of a base pixel, and a pixel value of the predicted pixel is the sum of the pixel value of the base pixel and a variation in pixel value between the base pixel and the generated pixel.

A reference pixel of a neighboring block, arranged at the upper left corner of the prediction block, may be set as a first base pixel; a value obtained by applying, to the first base pixel, a variation in pixel value between the first base pixel and the lowest pixel among the reference pixels of a neighboring block arranged at the left boundary of the prediction block, and a variation in pixel value between the first base pixel and the rightmost pixel among the reference pixels of a neighboring block arranged at the upper boundary of the prediction block, may be set as the pixel value of a second base pixel, namely a diagonal pixel at the lower right corner of the prediction block; and the pixel values of the diagonal pixels of the prediction block may be predicted from the first base pixel and the second base pixel.

Here, the non-diagonal pixels of the prediction block may be predicted by interpolation or extrapolation, using the diagonal pixels and the pixels of the neighboring blocks at the upper and/or left boundaries of the prediction block.

A reference pixel of a neighboring block, arranged at the upper left corner of the prediction block, may be set as the base pixel, and a value obtained by applying, to the base pixel, a variation in pixel value between the base pixel and a neighboring pixel arranged in the same row as the prediction target pixel, among the reference pixels of a neighboring block arranged at the left boundary of the prediction block, and a variation in pixel value between the base pixel and a neighboring pixel arranged in the same column as the prediction target pixel, among the reference pixels of a neighboring block arranged at the upper boundary of the prediction block, may be predicted as the pixel value of the prediction target pixel.
In addition, a pixel arranged in the same row or column as the prediction target pixel, among the pixels of the neighboring blocks arranged at the left or upper boundary of the prediction block, may be set as the base pixel, and a value obtained by applying, to the base pixel, the variation in pixel value between the base pixel and the prediction target pixel may be predicted as the pixel value of the prediction target pixel.

Here, the prediction target pixel may be a diagonal pixel of the prediction block, and a non-diagonal pixel of the prediction block may be predicted by interpolation, using the diagonal pixel and the pixels of the neighboring blocks.

The intra-prediction method may further include generating a reference pixel arranged on a boundary between an inter-mode block and the prediction unit when a block neighboring the prediction unit is an inter-mode block, wherein a pixel arranged on the boundary of the prediction unit, among the pixels of an intra-mode block arranged on the left side or the lower side of the reference pixel, may be set as a first base pixel; a pixel arranged on the boundary of the prediction unit, among the pixels of an intra-mode block arranged on the right side or the upper side of the reference pixel, may be set as a second base pixel; and the reference pixel may be generated based on a distance from the first base pixel to the reference pixel and a distance from the second base pixel to the reference pixel.

Here, the pixel value of the first base pixel may be an average pixel value of the pixels arranged on the boundary of the prediction unit, among the pixels of the intra-mode block to which the first base pixel belongs, and the pixel value of the second base pixel may be an average pixel value of the pixels arranged on the boundary of the prediction unit, among the pixels of the intra-mode block to which the second base pixel belongs.

In addition, the pixel value of the first base pixel may be the pixel value of the reference pixel when an intra-mode block is arranged only on the left side or the lower side of the reference pixel, and the pixel value of the second base pixel may be the pixel value of the reference pixel when an intra-mode block is arranged only on the right side or the upper side of the reference pixel.

The decoder may acquire an instruction to generate the pixels of the prediction block on the basis of the base pixel by entropy decoding. The decoder may likewise acquire an instruction to generate the reference pixels on the basis of the base pixel by entropy decoding.

Advantageous effects

As described above, according to the present invention, intra-prediction on a texture with directionality can be achieved effectively, in consideration of the variations of the reference pixels of the neighboring blocks.

In addition, planar prediction can be carried out taking into account the variations in the pixel values of the blocks neighboring a prediction block, thus improving the effectiveness of the prediction.

In addition, when constrained intra prediction (CIP) is used, a reference pixel is generated on the basis of a neighboring intra-mode block, at the position of a neighboring pixel coded in inter-prediction mode, and is used for intra-prediction taking variations in the pixel value into account, thereby improving the effectiveness of the prediction.

Description of the drawings
FIG. 1 is a block diagram illustrating a configuration of a video encoder according to an exemplary embodiment of the present invention.

FIG. 2 is a block diagram schematically illustrating a configuration of an intra-prediction module according to an exemplary embodiment of the present invention.

FIG. 3 is a block diagram illustrating a configuration of a video decoder according to an exemplary embodiment of the present invention.

FIG. 4 schematically illustrates a planar prediction procedure.

FIG. 5 schematically illustrates an alternative planar prediction procedure.

FIG. 6 schematically illustrates that a diagonal pixel of a current prediction block is predicted first.

FIG. 7 schematically illustrates a procedure for obtaining the other pixel values in the prediction block based on the diagonal pixels.

FIG. 8 schematically illustrates a procedure for predicting a pixel value taking into account a base pixel value and a variation with respect to the base pixel.

FIG. 9 schematically illustrates a procedure for obtaining, first, the diagonal pixels of a prediction block and, then, the pixel values of the remaining pixels.

FIG. 10 schematically illustrates that the diagonal pixels are obtained first and that the pixels other than the diagonal pixels are obtained by the same procedure used for the diagonal pixels.

FIG. 11 schematically illustrates a CIP procedure.

FIG. 12 schematically illustrates an alternative CIP procedure.

FIG. 13 schematically illustrates that a system according to the present invention carries out CIP in consideration of the variations in pixel value.

FIG. 14 is a flow chart schematically illustrating an operation of the encoder in the system according to the present invention.

FIG. 15 illustrates the prediction directions of the intra-prediction modes.

FIG. 16 is a flow chart schematically illustrating an operation of the decoder in the system according to the present invention.

Invention mode

Although the elements shown in the drawings are shown independently, in order to describe the different characteristics and functions of a video encoder/decoder, such a configuration does not indicate that each element is constructed as a separate hardware or software component. That is, the elements are arranged independently, and at least two elements may be combined into a single element, or a single element may be divided into a plurality of elements that perform the functions. It should be noted that embodiments in which some elements are integrated into a combined element and/or an element is divided into multiple separate elements are included in the scope of the present invention, without departing from its essence.

Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. Equal reference numbers in the drawings refer to equal elements throughout, and redundant descriptions of equal elements are omitted in this document.

FIG. 1 is a block diagram illustrating a configuration of a video encoder according to an exemplary embodiment of the present invention.
Referring to FIG. 1, the video encoder includes an image partition module 110, an inter-prediction module 120, an intra-prediction module 125, a transformation module 130, a quantization module 135, a dequantization module 140, an inverse transformation module 145, a deblocking filter 150, a memory 160, a rearrangement module 165 and an entropy coding module 170.

The image partition module 110 may receive the input of a current image and divide the image into at least one coding unit. A coding unit is a unit in which encoding is performed by the video encoder, and may also be called a CU. A coding unit can be recursively subdivided, with a depth based on a quadtree structure. A coding unit that has the maximum size is called a largest coding unit (LCU), and one with the minimum size a smallest coding unit (SCU). A coding unit may have a size of 8 x 8, 16 x 16, 32 x 32 or 64 x 64. The image partition module 110 may partition the coding unit to generate a prediction unit and a transformation unit. The prediction unit may also be called a PU, and the transformation unit may also be called a TU.

In an inter-prediction mode, the inter-prediction module 120 may carry out motion estimation (ME) and motion compensation (MC). The inter-prediction module 120 generates a prediction block based on information about at least one of the images preceding and following the current image, which may be called inter-frame prediction.

The inter-prediction module 120 is provided with a partitioned prediction target block and at least one reference block stored in the memory 160. The inter-prediction module 120 performs motion estimation using the prediction target block and the reference block. The inter-prediction module 120 generates motion information, including a motion vector (MV), a reference block index and a prediction mode, as a result of the motion estimation. In addition, the inter-prediction module 120 performs motion compensation using the motion information and the reference block. Here, the inter-prediction module 120 generates and outputs a prediction block, corresponding to an input block, from the reference block. The motion information is entropy-encoded to form a compressed bitstream, which is transmitted from the video encoder to a video decoder.

In an intra-prediction mode, the intra-prediction module 125 may generate a prediction block based on information about the pixels in the current image. Intra-prediction is also called intra-frame prediction. In the intra-prediction mode, a prediction target block and a reconstructed block, reconstructed by encoding and decoding, are input to the intra-prediction module 125. Here, the reconstructed block is an image that has not yet been passed through the deblocking filter. The reconstructed block may be a previous prediction block.

FIG. 2 is a block diagram schematically illustrating a configuration of the intra-prediction module according to an exemplary embodiment of the present invention. Referring to FIG. 2, the intra-prediction module includes a reference pixel generation module 210, an intra-prediction mode determination module 220 and a prediction block generation module 230.

The reference pixel generation module 210 generates the reference pixels needed for intra-prediction.
The pixels in the rightmost vertical line of the left block neighboring the prediction target block and the pixels in the lowest horizontal line of the upper block neighboring the prediction target block are used to generate the reference pixels. For example, when the prediction target block has a size N, 2N pixels in each of the left and top directions are used as reference pixels. The reference pixels can be used as they are or after adaptive intra smoothing (AIS) filtering. When the reference pixels are subjected to AIS filtering, information about the AIS filtering is signaled.

The intra-prediction mode determination module 220 receives the input of the prediction target block and the reconstructed block. The intra-prediction mode determination module 220 selects, from among the prediction modes, the mode that minimizes the amount of information to be encoded, using the input images, and outputs the information about the prediction mode. Here, a preset cost function or a Hadamard transform can be used.

The prediction block generation module 230 receives the input of the prediction mode information and the reference pixels. The prediction block generation module 230 spatially predicts and compensates a pixel value of the prediction target block, using the prediction mode information and a pixel value of the reference pixels, thereby generating a prediction block.

The information about the prediction mode is entropy-encoded to form a compressed bitstream, along with the video data, and transmitted from the video encoder to the video decoder. The video decoder uses the information about the prediction mode when generating an intra-prediction block.

With reference again to FIG. 1, a differential block is generated by the difference between the prediction target block and the prediction block generated in the inter-prediction or intra-prediction mode, and is input to the transformation module 130. The transformation module 130 transforms the differential block by transformation units, to generate transformation coefficients.

A transformation block with a transformation unit has a quadtree structure, within maximum and minimum sizes, and is therefore not limited to a predetermined size. Each transformation block has a flag that indicates whether or not the current block is divided into sub-blocks; when the flag is 1, the current transformation block can be divided into four sub-blocks. The discrete cosine transform (DCT) can be used for the transformation.

The quantization module 135 may quantize the values transformed by the transformation module 130. The quantization coefficient can change based on the block or the importance of the image. The quantized transformation coefficients can be provided to the rearrangement module 165 and the dequantization module 140.

The rearrangement module 165 may change a two-dimensional (2D) block of transformation coefficients into a one-dimensional (1D) vector of transformation coefficients by scanning, in order to improve the efficiency of the entropy coding. The rearrangement module 165 may change the scanning order based on stochastic statistics, to further improve the entropy coding efficiency.

The entropy coding module 170 entropy-encodes the values obtained by the rearrangement module 165, and the encoded values form a compressed bitstream, which is stored or transmitted through a network abstraction layer (NAL).
The dequantization module 140 receives and dequantizes the transformation coefficients quantized by the quantization module 135, and the inverse transformation module 145 inversely transforms them, thereby generating a reconstructed differential block. The reconstructed differential block is merged with the prediction block generated by the inter-prediction module 120 or the intra-prediction module 125, to generate a reconstructed block. The reconstructed block is provided to the intra-prediction module 125 and the deblocking filter 150.

The deblocking filter 150 filters the reconstructed block to eliminate the distortions at the boundaries between blocks that occur in the encoding and decoding processes, and provides the filtered result to an adaptive loop filter (ALF) 155.

The ALF 155 performs filtering to minimize the error between the prediction target block and the final reconstructed block. The ALF 155 performs the filtering based on a value resulting from the comparison of the reconstructed block filtered by the deblocking filter 150 with the current prediction target block, and filter coefficient information of the ALF 155 is loaded into a slice header and transmitted from the encoder to the decoder.

The memory 160 can store the final reconstructed block obtained through the ALF 155, and the stored (final) reconstructed block can be provided to the inter-prediction module 120 to carry out inter-prediction.

FIG. 3 is a block diagram illustrating a configuration of a video decoder according to an exemplary embodiment of the present invention. Referring to FIG. 3, the video decoder includes an entropy decoding module 310, a rearrangement module 315, a dequantization module 320, an inverse transformation module 325, an inter-prediction module 330, an intra-prediction module 335, a deblocking filter 340, an ALF 345 and a memory 350.

The entropy decoding module 310 receives a compressed bitstream from an NAL. The entropy decoding module 310 entropy-decodes the received bitstream and, if the bitstream includes them, also entropy-decodes the prediction mode and the motion vector information. An entropy-decoded transformation coefficient, or differential signal, is provided to the rearrangement module 315. The rearrangement module 315 inversely scans the transformation coefficient, or differential signal, to generate a two-dimensional block of transformation coefficients.

The dequantization module 320 receives and dequantizes the entropy-decoded and rearranged transformation coefficients. The inverse transformation module 325 inversely transforms the dequantized transformation coefficients to generate a differential block.

The differential block can be merged with the prediction block generated by the inter-prediction module 330 or the intra-prediction module 335, to generate a reconstructed block. The reconstructed block is provided to the intra-prediction module 335 and the deblocking filter 340. The inter-prediction module 330 and the intra-prediction module 335 may perform the same operations as the inter-prediction module 120 and the intra-prediction module 125 of the video encoder.

The deblocking filter 340 filters the reconstructed block to eliminate the distortions at the boundaries between blocks that occur in the encoding and decoding processes, and provides the filtered result to the ALF 345. The ALF 345 performs filtering to minimize the error
between the prediction target block and the finally reconstructed block. The memory 350 may store the final reconstructed block obtained through the ALF 345, and the stored (final) reconstructed block may be provided to the inter-prediction module 330 to carry out inter-prediction.

Meanwhile, in an area with insignificant changes in texture, for example a monotonous background of sky or sea, planar intra-prediction is used to further improve the coding efficiency.

Intra-prediction is classified into directional prediction, DC prediction and planar prediction, where planar prediction can be seen as an expanded concept of DC prediction. Although planar prediction can be broadly included in DC prediction, planar prediction can cover prediction cases that DC prediction does not address. For example, DC prediction is preferable for a uniform texture, while planar prediction is effective for the pixel-wise prediction of block values that have directionality.

The present specification illustrates a procedure for improving the efficiency of planar prediction with respect to a texture with directionality, using the variations in the pixel values of the reference pixels of the neighboring blocks.

FIG. 4 schematically illustrates a planar prediction procedure.

Referring to FIG. 4(A), a pixel value 425 of the pixel at the lower right corner of a current block 420 is predicted. The pixel value 425 of the pixel at the lower right corner of the current block can be predicted as a DC value.

Referring to FIG. 4(B), the pixel values of the pixels located at the right boundary of the current block and the pixel values of the pixels located at the lower boundary of the current block are predicted. For example, a pixel value 445 located at the right boundary of the current block can be predicted by linear interpolation of a pixel value 450 of the upper block and the DC value 425. In addition, a pixel value 435 located at the lower boundary of the current block can be predicted by linear interpolation of a pixel value 430 of the left block and the DC value 425.

Referring to FIG. 4(C), the pixel values of the remaining pixels, other than the pixel at the lower right corner, the pixels at the right boundary and the pixels at the lower boundary of the current block, can be predicted by bilinear interpolation, using the pixel values of the upper and left blocks and the pixel values already predicted in the current block. For example, a pixel value 475 in the current block can be predicted by interpolation using a pixel value 460 of the upper block, a pixel value 455 of the left block, the already predicted pixel value 445 located at the right boundary of the current block and the already predicted pixel value 435 located at the lower boundary of the current block.

Referring to FIG. 4(D), the prediction samples (predicted samples) obtained by the above process can be refined. For example, a pixel value X 485 in the current block can be refined using an upper sample value T 480 and a left sample value L 490. Specifically, the refined value X′ of X can be obtained as X′ = {(X << 1) + L + T + 1} >> 2. Here, "x << y" indicates that the two's-complement integer representation of x is arithmetically shifted left by y binary digits, while "x >> y" indicates that it is arithmetically shifted right by y binary digits.
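The procedure of FIGS. 4(A) to 4(D) can be summarized in code. The sketch below is illustrative only: the text does not give the exact interpolation weights for steps (B) and (C), nor the derivation of the DC value, so simple distance-based weights and a mean of the references are assumed; only the refinement formula X′ = {(X << 1) + L + T + 1} >> 2 is taken literally from the text, and the function name is hypothetical.

```python
import numpy as np

def planar_fig4(top, left, n=8):
    """Sketch of the planar procedure of FIG. 4 for an n x n block.

    top  : n reference pixels of the upper neighboring block.
    left : n reference pixels of the left neighboring block.
    """
    pred = np.zeros((n, n), dtype=np.int32)

    # (A) predict the bottom-right pixel as a DC value (mean of the references)
    dc = (int(np.sum(top)) + int(np.sum(left)) + n) // (2 * n)
    pred[n - 1, n - 1] = dc

    # (B) right boundary: interpolate between the top reference and the DC value;
    #     bottom boundary: interpolate between the left reference and the DC value
    for i in range(n - 1):
        pred[i, n - 1] = ((n - 1 - i) * top[n - 1] + (i + 1) * dc) // n
        pred[n - 1, i] = ((n - 1 - i) * left[n - 1] + (i + 1) * dc) // n

    # (C) remaining pixels: bilinear interpolation between the left reference,
    #     the right boundary, the top reference and the bottom boundary
    for i in range(n - 1):
        for j in range(n - 1):
            h = ((n - 1 - j) * left[i] + (j + 1) * pred[i, n - 1]) // n
            v = ((n - 1 - i) * top[j] + (i + 1) * pred[n - 1, j]) // n
            pred[i, j] = (h + v + 1) >> 1

    # (D) refinement: X' = ((X << 1) + L + T + 1) >> 2, with T the sample above
    #     and L the sample to the left (the references serve on the first row/column)
    ref = pred.copy()
    for i in range(n):
        for j in range(n):
            t = ref[i - 1, j] if i > 0 else top[j]
            l = ref[i, j - 1] if j > 0 else left[i]
            pred[i, j] = ((ref[i, j] << 1) + l + t + 1) >> 2
    return pred
```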
FIG. 5 schematically illustrates an alternative planar prediction procedure. In the procedure of FIG. 5, the pixel values of the pixels located on the diagonal of the current block are predicted first, and the pixel values of the remaining pixels of the current block are predicted using those predicted pixel values. To facilitate the description, the pixels located diagonally from the top left to the bottom right, among the pixels that make up the block, are called diagonal pixels below.

Referring to FIG. 5(A), the pixel values of the diagonal pixels 540 of a current block 510 are predicted using a pixel value 520 of an upper reference block and a pixel value 530 of a left reference block. For example, the pixel value of a diagonal pixel P in the current block can be obtained, using the pixel value of a pixel Ref_Above located on the boundary between the current block and the upper block, among the pixels of the upper block, and the pixel value of a pixel Ref_Left located on the boundary between the current block and the left block, among the pixels of the left block, as P = (Ref_Left + Ref_Above + 1) >> 1.

Referring to FIG. 5(B), the pixel values of the pixels of the current block 510 other than the diagonal pixels 540 can be obtained by linear interpolation, using the pixel values obtained in FIG. 5(A) and the pixel values of the pixels of the upper and left blocks on the boundaries. For example, P1 can be obtained using the pixel Ref_Above of the upper block and the obtained diagonal pixel P, as P1 = (Ref_Above × d2 + P × d1) / (d1 + d2). In addition, P2 can be obtained as P2 = (Ref_Left × d3 + P × d4) / (d3 + d4).
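A compact sketch of this alternative procedure follows. The distances d1 to d4 are defined in the figure rather than in the text, so they are assumed here to be the pixel distances from the interpolated position to the reference pixel and to the diagonal pixel of the same column or row; the function name is hypothetical.

```python
import numpy as np

def planar_fig5(top, left, n=8):
    """Sketch of the alternative planar procedure of FIG. 5 for an n x n block.

    top[j]  : reference pixel of the upper block above column j (Ref_Above).
    left[i] : reference pixel of the left block next to row i (Ref_Left).
    """
    pred = np.zeros((n, n), dtype=np.int32)

    # (A) diagonal pixels: P = (Ref_Left + Ref_Above + 1) >> 1
    for i in range(n):
        pred[i, i] = (left[i] + top[i] + 1) >> 1

    # (B) remaining pixels: linear interpolation between the diagonal pixel and
    #     the reference pixel of the same column (above the diagonal) or row
    for i in range(n):
        for j in range(n):
            if i < j:    # above the diagonal: P1 = (Ref_Above*d2 + P*d1)/(d1+d2)
                d1, d2 = i + 1, j - i      # distances to Ref_Above and to P
                pred[i, j] = (top[j] * d2 + pred[j, j] * d1) // (d1 + d2)
            elif i > j:  # below the diagonal: P2 = (Ref_Left*d3 + P*d4)/(d3+d4)
                d3, d4 = i - j, j + 1      # distances to P and to Ref_Left
                pred[i, j] = (left[i] * d3 + pred[i, i] * d4) // (d3 + d4)
    return pred
```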
Meanwhile, the planar prediction procedures illustrated in FIGS. 4 and 5 are effective for a uniform texture without directionality, but may have reduced prediction efficiency in the case of a texture with directionality, such as luma pixels in which the luminance changes essentially in one direction, for example the horizontal direction, but hardly changes in another direction, for example the vertical direction.

Therefore, planar intra-prediction that takes variations in the pixel values into account may be necessary. The planar intra-prediction according to the present invention selects or predicts a base pixel value and applies the variations in pixel value between the base pixel and a target pixel to the base pixel value, thereby predicting the pixel value of the target pixel.

Hereinafter, examples of the present invention will be described with reference to the drawings.

Example 1

FIG. 6 schematically illustrates that a diagonal pixel Pii of a current prediction block is predicted first. Although FIG. 6 illustrates an 8 x 8 prediction block to facilitate the description, the present invention can also be applied to an N x N prediction block, without being limited to the 8 x 8 prediction block.

In Example 1, shown in FIG. 6, the diagonal pixels of the current prediction block are first predicted based on the reference pixels (Ri0 and/or R0j, with 0 ≤ i, j ≤ 8 in the case of an 8 x 8 prediction block) of a reference block neighboring the current prediction block. That is, after the diagonal pixels Pii are obtained, the other pixel values in the prediction block can be obtained by interpolation or extrapolation, using the values (Rij) of the reference pixels of the neighboring block and the Pii.

FIG. 7 schematically illustrates a procedure for obtaining the other pixel values in the prediction block based on the diagonal pixels. In the present invention, planar prediction is carried out in consideration of the changes in the pixel values.

For example, as shown in FIG. 7(A), when the reference pixel values increase both in the x direction (to the right) and in the y direction (downwards), the pixel values in the prediction block are also more likely to increase in the lower right direction. In this case, a pixel value of P88, at the lower right corner of the prediction block, can be predicted first, and the other pixels can be predicted based on the pixel value of P88.

To predict the value of P88, with the pixel value of the reference pixel R00 at the top left corner of the current prediction block defined as the pixel value of the base pixel, the variation between the base pixel and the prediction target pixel P88 of the prediction block is applied to the pixel value of the base pixel. For example, the pixel value of the target pixel P88 can be obtained by Equation 1.

[Equation 1]

P88 = R00 + √((R08 − R00)² + (R80 − R00)²)

When P88 is obtained, the other diagonal pixels Pii can be obtained by Equation 2.

[Equation 2]

Pii = R00 + (i/8) × (P88 − R00)

In this case, since the present example illustrates the 8 x 8 prediction block, i may be 1, 2, ..., 8. Although Example 1 illustrates the 8 x 8 prediction block to facilitate the description, in an N x N prediction block the Pii can be obtained as Pii = R00 + (i/N) × (P88 − R00).

As shown in FIG. 7(B), even when the reference pixel values decrease both in the x direction (to the right) and in the y direction (downwards), the pixel value of P88, at the lower right corner of the prediction block, can be obtained in consideration of the decreasing variations in the pixel values, and the other pixel values can be predicted based on the pixel value of P88. In this case, P88 can be obtained by Equation 3.

[Equation 3]

P88 = R00 − √((R08 − R00)² + (R80 − R00)²)

When P88 is obtained, the other diagonal pixels in the prediction block can be obtained by Equation 4.

[Equation 4]

Pii = R00 + (i/8) × (P88 − R00)

Here, i can be 1, 2, ..., 8.

As shown in FIG. 7(C), when the reference pixel values increase in the upper right direction, the pixels located on the diagonal from the lower left to the upper right of the prediction block are obtained first, based on the variations in the pixel values, unlike what happens in FIGS. 7(A) and 7(B). For example, the pixel value of P81, at the lower left corner of the prediction block, is obtained, and the values of the remaining pixels can be predicted based on the pixel value of P81. In this case, P81 can be obtained by Equation 5.

[Equation 5]

(equation not reproduced in the source)

When P81 is obtained, the remaining diagonal pixels (from the lower left to the upper right) of the prediction block can be obtained by Equation 6.

[Equation 6]

(equation not reproduced in the source)

Here, i can be 1, 2, ..., 8.

Also, as shown in FIG. 7(D), when the reference pixel values increase in the lower left direction, the diagonal pixels located from the lower left to the upper right of the prediction block are obtained first, based on the variations in the pixel values. For example, the pixel value of P81, at the lower left corner of the prediction block, is obtained, and the values of the remaining pixels can be predicted based on the pixel value of P81. In this case, P81 can be obtained by Equation 7.

[Equation 7]

(equation not reproduced in the source)

When P81 is obtained, the remaining diagonal pixels (from the lower left to the upper right) of the prediction block can be obtained by Equation 8.

[Equation 8]

(equation not reproduced in the source)

Here, i can be 1, 2, ..., 8.
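Since Equations 1 to 8 are only partially legible in the source, the following sketch assumes the reading given above for Equations 1 to 4: the variation between the base pixel R00 and P88 is taken as the Euclidean combination of the variations along the upper and left boundaries, and the remaining diagonal pixels lie linearly between R00 and P88. The sign choice for the decreasing case of FIG. 7(B) is likewise an assumption; the off-diagonal pixels would then be filled by the interpolation of Equation 10, described next.

```python
import math

def example1_diagonals(top, left, n=8):
    """Sketch of the diagonal prediction of Example 1 for FIGS. 7(A)/7(B).

    top[j] = R0j and left[i] = Ri0, with top[0] = left[0] = R00,
    so both arrays have n + 1 entries.
    """
    r00 = top[0]
    dx = top[n] - r00    # variation R08 - R00 along the upper boundary
    dy = left[n] - r00   # variation R80 - R00 along the left boundary

    # Equation 1 (increasing values) or Equation 3 (decreasing values)
    delta = math.sqrt(dx * dx + dy * dy)
    p_nn = r00 + delta if dx + dy >= 0 else r00 - delta

    # Equations 2/4: Pii = R00 + (i/n) * (Pnn - R00), for i = 1..n
    return [r00 + (i / n) * (p_nn - r00) for i in range(1, n + 1)]
```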
In view of the computational load, an approximation of the square-root calculations for obtaining the diagonal pixels can be considered, as in Equation 9.

[Equation 9]

(equation not reproduced in the source)

Subsequently, the other pixel values in the prediction block can be obtained by interpolation or extrapolation, using the prediction values of the diagonal pixels, the upper reference pixel values and the left reference pixel values.

In FIGS. 7(A) and 7(B), the pixels Pij of the prediction block can be obtained by interpolation, using the diagonal pixels Pii and the reference pixels R of the neighboring block. Here, the interpolation shown in Equation 10 can be used.

[Equation 10]

Pij = (R0j × d2 + Pii × d1) / (d1 + d2), or
Pij = (Ri0 × d2 + Pii × d1) / (d1 + d2)

Here, d1 is the distance from the pixel R0j or Ri0 of the neighboring block, used for the interpolation, to the prediction target pixel Pij, and d2 is the distance from the diagonal pixel Pii, used for the interpolation, to the prediction target pixel Pij.

In addition, in FIGS. 7(C) and 7(D), the pixels Pij obtained by interpolation, among the pixels of the prediction block, can be obtained by Equation 11.

[Equation 11]

Pij = (R0j × d2 + Pii × d1) / (d1 + d2), or
Pij = (Ri0 × d2 + Pii × d1) / (d1 + d2)

Here, i + j < 9, and d1 is the distance from the pixel R0j or Ri0 of the neighboring block, used for the interpolation, to the prediction target pixel Pij, while d2 is the distance from the diagonal pixel Pii, used for the interpolation, to the prediction target pixel Pij. Here, although Equation 11 is used for the interpolation to obtain the pixels Pij of the prediction block, several interpolation procedures can be employed in the present invention, without being limited thereto.

Meanwhile, in FIGS. 7(C) and 7(D), there are pixels obtained by extrapolation among the pixels of the prediction block. Here, the extrapolation shown in Equation 12 can be used to obtain these pixels of the prediction block.

[Equation 12]

(equation not reproduced in the source)

In this case, i + j > 9 and P is a diagonal pixel used for the extrapolation. Also, as described above, d1 and d2 are, respectively, the distance from the reference pixel to the prediction target pixel Pij and the distance from the pixel Pii to the prediction target pixel Pij.

Example 2

FIG. 8 schematically illustrates another procedure for predicting a pixel value taking into account a base pixel value and a variation with respect to the base pixel. Although FIG. 8 illustrates an 8 x 8 prediction block to facilitate the description, the present invention can also be applied to an N x N prediction block, without being limited to the 8 x 8 prediction block.

FIG. 8 illustrates a reference pixel R00, located at the top left corner of the prediction block, as the base pixel. In Example 2, a prediction target pixel Pij is obtained by applying the vertical and horizontal variations with respect to the reference pixel to the base pixel value. For example, the target pixel Pij is obtained by Equation 13.

[Equation 13]

Pij = R00 + Δx + Δy

Here, Δy = Ri0 − R00, Δx = R0j − R00, and 1 ≤ i, j ≤ 8 in the case of the 8 x 8 prediction block.

For example, with reference to FIG. 8, a pixel P33 is obtained as P33 = R00 + Δx + Δy, according to Equation 13. Here, Δx and Δy are the variations in pixel value in the x and y directions from the base pixel R00 to P33.

Alternatively, with reference to FIG. 8, a pixel P76 is obtained as P76 = R00 + Δx′ + Δy′, according to Equation 13. Here, Δx′ and Δy′ are the variations in pixel value in the x and y directions from the base pixel R00 to P76.
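Equation 13 translates directly into code. The sketch below assumes only that the references Ri0 and R0j are available, and omits the clipping of the result to the valid sample range; the function name is hypothetical.

```python
import numpy as np

def example2(top, left, n=8):
    """Sketch of Example 2 (Equation 13): Pij = R00 + Δx + Δy.

    top[0] = R00, top[j] = R0j for j = 1..n; left[i] = Ri0 for i = 1..n.
    """
    r00 = int(top[0])
    pred = np.zeros((n, n), dtype=np.int32)
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            dy = left[i] - r00   # vertical variation, Δy = Ri0 - R00
            dx = top[j] - r00    # horizontal variation, Δx = R0j - R00
            pred[i - 1, j - 1] = r00 + dx + dy
    return pred
```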
Example 3

FIG. 9 schematically illustrates another procedure for obtaining, first, the diagonal pixels of a prediction block and, then, the pixel values of the remaining pixels.

Although FIG. 5 illustrates that the diagonal pixels are obtained based on an average value of two pixels in the horizontal/vertical directions of the blocks neighboring the current prediction block, Example 3, shown in FIG. 9, obtains the diagonal pixels taking the variations into account.

Referring to FIG. 9(A), the diagonal pixels of the prediction block are predicted using the pixel values of the neighboring blocks located at the upper and/or left boundaries of the prediction block. For example, the diagonal pixels Pii are predicted by Equation 14.

[Equation 14]

Pii = R0i + Δy, or
Pii = Ri0 + Δx

For example, with reference to FIG. 9(A), P33 can be predicted as P33 = R03 + Δy or P33 = R30 + Δx, according to Equation 14. Δx and Δy are, respectively, the variations in pixel value in the x direction from the base pixel R30 to P33 and in the y direction from the base pixel R03 to P33.

Referring to FIG. 9(B), the other pixels Pij of the current block, other than the diagonal pixels, can be predicted by linear interpolation, using the prediction values of the diagonal pixels and the reference pixels R00, R10 to R80 and R01 to R08 of the neighboring blocks at the upper and left boundaries of the current block. For example, a pixel value Pij can be predicted by Equation 15.

[Equation 15]

Pij = (R0j × d2 + Pii × d1) / (d1 + d2), or
Pij = (Ri0 × d2 + Pii × d1) / (d1 + d2)

d1 is the distance from the pixel R0j or Ri0 of the neighboring blocks, used for the interpolation, to the prediction target pixel Pij, and d2 is the distance from the diagonal pixel Pii, used for the interpolation, to the prediction target pixel Pij.

Example 4

FIG. 10 schematically illustrates that the diagonal pixels are obtained first and that the pixels other than the diagonal pixels are obtained by the same procedure used for the diagonal pixels.

In FIG. 10, the diagonal pixels can be predicted in the same manner as illustrated in FIG. 9. Therefore, with reference to FIG. 10(A), a diagonal pixel P33 of the current prediction block can be predicted as P33 = R03 + Δy or P33 = R30 + Δx.

Subsequently, the other pixels Pij of the current block, other than the diagonal pixels, can be predicted using the prediction values of the diagonal pixels and the reference pixels R00, R10 to R80 and R01 to R08 of the neighboring blocks at the upper and left boundaries of the current block. Here, the same procedure used to obtain the diagonal pixels can be used. For example, a pixel Pij can be predicted by Equation 16.

[Equation 16]

Pij = R0j + Δy, or
Pij = Ri0 + Δx

Here, Δy = Ri0 − R00, Δx = R0j − R00, and 1 ≤ i, j ≤ 8 in the case of the 8 x 8 prediction block.

For example, with reference to FIG. 10, P37 can be obtained as P37 = R07 + Δy or P37 = R70 + Δx, according to Equation 16.
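A sketch covering both Example 3 (Equations 14 and 15) and Example 4 (Equations 14 and 16) is given below. The interpretation of the distances d1 and d2 follows the one assumed for Equation 10, and the function name is hypothetical.

```python
import numpy as np

def example3_example4(top, left, n=8, interpolate=True):
    """Sketch of Examples 3 and 4. Diagonal pixels follow Equation 14; the
    remaining pixels follow Equation 15 (Example 3, interpolate=True) or
    Equation 16 (Example 4, interpolate=False).

    top[0] = R00, top[j] = R0j; left[i] = Ri0 (indices 1..n).
    """
    r00 = int(top[0])
    pred = np.zeros((n + 1, n + 1))        # 1-based, pred[i, j] = Pij

    # Equation 14: Pii = R0i + Δy, with Δy = Ri0 - R00 (the alternative
    # Pii = Ri0 + Δx stated in the text gives the same value)
    for i in range(1, n + 1):
        pred[i, i] = top[i] + (left[i] - r00)

    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i == j:
                continue
            if interpolate:                # Equation 15 (Example 3)
                if i < j:                  # between R0j and the diagonal Pjj
                    d1, d2 = i, j - i
                    pred[i, j] = (top[j] * d2 + pred[j, j] * d1) / (d1 + d2)
                else:                      # between Ri0 and the diagonal Pii
                    d1, d2 = j, i - j
                    pred[i, j] = (left[i] * d2 + pred[i, i] * d1) / (d1 + d2)
            else:                          # Equation 16 (Example 4)
                pred[i, j] = top[j] + (left[i] - r00)
    return pred[1:, 1:]
```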
Meanwhile, the accumulation of the small errors produced by the integer arithmetic of the encoder or the decoder over a long time can cause a serious error. In addition, when a transmission error occurs in a block neighboring a current block, a mismatch arises between the encoder and the decoder, or the error propagates. For example, when an error occurs in a neighboring block, the pixel values on a boundary of that neighboring block are changed. In this case, when the decoder uses a pixel whose pixel value has changed as a reference pixel, the error propagates to the current block. Therefore, a tool is needed to avoid this problem, for example a coding tool such as constrained intra prediction (CIP).

FIG. 11 schematically illustrates a CIP procedure.

In the procedure of FIG. 11, if any block neighboring a current macroblock T is in inter-prediction mode, only the DC intra-prediction mode is used, and the DC prediction value is set to 128. Here, the pixel values of the blocks predicted by the inter-prediction mode, among the neighboring blocks, are not used as reference pixel values. Therefore, in this procedure the DC prediction mode is mandatory, and even the available information, for example the neighboring pixels in intra-prediction mode, is excluded.

FIG. 12 schematically illustrates an alternative CIP procedure.

In the procedure of FIG. 12, the pixel values of the blocks predicted in the intra-prediction mode, among the neighboring blocks, are used as reference pixel values, and the pixel values at the positions of the blocks predicted in the inter-prediction mode are derived using the neighboring blocks in intra-prediction mode. Therefore, not only the DC mode but also other intra-prediction modes can be used.

Referring to FIG. 12, among the blocks neighboring a current prediction block T, the pixel values 1210, 1220 and 1230 of the blocks A, B, D, E, F, H and I, predicted by the inter-prediction mode, are obtained using the pixels of the blocks predicted by the intra-prediction mode. For example, when predicted pixels of the intra-prediction mode are present on both the right and left sides of a target inter-prediction sample, the pixel value PT of a block predicted by the inter-prediction mode is obtained by Equation 17.

[Equation 17]

PT = (PLB + PRA + 1) >> 1

Here, PT is the target inter-prediction sample, PLB is the left or lower intra-prediction sample, and PRA is the right or upper intra-prediction sample. In addition, when an intra-prediction sample is present on only one side of the target inter-prediction sample, the pixel value PT of a block predicted by the inter-prediction mode is obtained by Equation 18.

[Equation 18]

PT = PLB, or PT = PRA

The procedure of FIG. 12 uses the intra-prediction mode more adequately than the procedure of FIG. 11, but it uses the average value of the pixel values available in the intra-prediction mode, or a single available pixel value of the intra-prediction mode by itself, as the pixel value of a neighboring block predicted in the inter-prediction mode, without regard to the variation in the pixel values. Therefore, a CIP procedure that takes the variations in the pixel values into account is needed.

Example 5

FIG. 13 schematically illustrates that a system according to the present invention carries out CIP in consideration of the variations in the pixel values.

The procedure of FIG. 13, which uses the variations in the pixel values of both pixels for an interpolation, achieves a more accurate prediction of the target pixel value than the procedure of FIG. 12, which uses the average of both pixel values as the pixel value to be obtained. For example, a pixel PT, among the pixel values 1310, 1320 and 1330 to be obtained, can be obtained by Equation 19.

[Equation 19]

PT = (PLB × d2 + PRA × d1) / (d1 + d2)

Here, PT is the prediction target sample, PLB is the left or lower intra-prediction sample, and PRA is the right or upper intra-prediction sample. In addition, as shown in FIG. 13, d1 is the distance between PLB and PT, and d2 is the distance between PRA and PT.
For example, with reference to FIG. 13, PT1 can be obtained as (PLB1 × d21 + PRA1 × d11) / (d11 + d21), and PT2 can be obtained as (PLB2 × d22 + PRA2 × d12) / (d12 + d22).

If the intra-prediction sample to be used for the interpolation is present only on the right or left side, or only on the upper or lower side, of the prediction target sample PT, then PT = PLB or PT = PRA. In addition, if there is no block predicted in the intra-prediction mode neighboring the prediction target block T, a pixel value at the same position as in a previous image can be copied for use as the reference pixel value.

The average values of the intra-mode pixels on the boundary can be used as the PLB or PRA values. For example, in FIG. 13, when PT is located in the lower pixel row 1320 of a block E or a block D, the average value of the four lowest pixels of a block C in the intra-prediction mode can be used as PRA, and the average value of the eight rightmost pixels of a block G can be used as PLB. In this case, the reference point of d1 is the uppermost pixel among the rightmost pixels of the block G, and the reference point of d2 is the leftmost pixel among the lowest pixels of the block C.

In addition, the linear interpolation gives a smoothing effect on the boundary pixels and, therefore, adaptive intra smoothing (AIS) can be deactivated. Here, in the DC prediction mode, the filtering of the pixels on the boundary of the prediction block can be activated.
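A one-dimensional sketch of this CIP reference generation follows; it reduces the two-dimensional boundary of FIG. 13 to a single line of reference samples, which is a simplification of the figure, and applies Equation 19 where intra samples exist on both sides and Equation 18 where only one side is available. The function name and the mask representation are hypothetical.

```python
def cip_reference(samples, intra_mask):
    """Sketch of the CIP generation of Example 5 (Equation 19) over a 1-D
    line of neighboring reference samples. Samples at inter-coded positions
    are re-derived from the nearest intra-coded samples, with weights
    inversely proportional to the distances d1 and d2.

    samples    : reference pixel values along the boundary.
    intra_mask : True where the sample comes from an intra-coded block.
    """
    n = len(samples)
    out = list(samples)
    for t in range(n):
        if intra_mask[t]:
            continue
        # nearest intra samples to the left (PLB) and to the right (PRA)
        lb = next((k for k in range(t - 1, -1, -1) if intra_mask[k]), None)
        ra = next((k for k in range(t + 1, n) if intra_mask[k]), None)
        if lb is not None and ra is not None:
            d1, d2 = t - lb, ra - t
            out[t] = (samples[lb] * d2 + samples[ra] * d1) // (d1 + d2)
        elif lb is not None:             # Equation 18: PT = PLB
            out[t] = samples[lb]
        elif ra is not None:             # Equation 18: PT = PRA
            out[t] = samples[ra]
        # with no intra neighbor at all, the text copies the co-located
        # value from a previous image; here the sample is left unchanged
    return out
```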
FIG. 14 is a flow chart schematically illustrating an operation of the encoder in the system according to the present invention.

Referring to FIG. 14, a new prediction unit of a current image is input (S1410). The prediction unit (PU) can be a basic unit for intra-prediction and inter-prediction. The prediction unit may be a smaller block than a coding unit (CU) and may have a rectangular shape, not necessarily a square shape. The intra-prediction of the prediction unit is basically carried out on a 2N x 2N or N x N block.

Subsequently, the reference pixels needed for intra-prediction are generated (S1420). The pixels in the rightmost vertical line of the left block neighboring the current prediction block and the pixels in the lowest horizontal line of the upper block neighboring the current prediction block are used to generate the reference pixels. When the prediction block has a size N, a total of 2N pixels of each of the left and upper blocks are used as reference pixels. The reference pixels can be used either as they are or after smoothing.

When smoothing is used, the smoothing information can also be signaled to the decoder. For example, when the smoothing is carried out, an AIS filter with the filter coefficients [1, 2, 1] or [1, 1, 4, 1, 1] can be used. Of these two sets of coefficients, the second can provide stronger filtering. As mentioned above, information that includes whether or not a filter is used, the type of filter to be used and the filter coefficients can be signaled to the decoder.

Meanwhile, when CIP is used to generate the reference pixels, a CIP flag is set to 1. When CIP is applied, only the pixels of the neighboring blocks coded in the intra-prediction mode are used as reference pixels, and the pixels of the neighboring blocks coded in the inter-prediction mode are not used as reference pixels. In this case, as shown in FIG. 13, the pixels (target prediction samples) corresponding to the positions of the pixels of the neighboring blocks coded in the inter-prediction mode are generated as reference pixels by interpolation of the neighboring reference pixels coded in the intra-prediction mode, or the neighboring reference pixels coded in the intra-prediction mode are copied and used as the reference pixels corresponding to the positions of the pixels of the neighboring blocks coded in the inter-prediction mode.

For example, when prediction pixels of the intra-prediction mode are present on both the right and left sides, or on the upper and lower sides, of a target inter-prediction sample, the target prediction sample PT located in a block predicted in the inter-prediction mode can be obtained by Equation 17. Also, when an intra-prediction sample is present on only one side of the target prediction sample, the target prediction sample PT located at the position of a block predicted in the inter-prediction mode can be obtained by Equation 18. In Equation 17 and/or Equation 18, the average values of the corresponding pixels of the intra-prediction mode can be used as the PLB and PRA values. If there is no neighboring block predicted in the intra-prediction mode, a pixel value at the same position as in a previous image can be copied for use as the reference pixel value. Since the linear interpolation gives a smoothing effect on the boundary pixels, it may be effective to deactivate AIS when CIP is used.

Subsequently, an intra-prediction mode is determined (S1430).

The intra-prediction mode is determined per prediction unit (PU), and the optimal prediction mode is determined in view of the relationship between the required bit rate and the magnitude of the distortion.

For example, when rate-distortion optimization (RDO) is activated, the mode that minimizes the cost J = R + rD can be selected (R is the bit rate, D is the magnitude of the distortion and r is a Lagrange variable). Here, thorough local decoding is needed, in which case the complexity may increase.

When RDO is deactivated, a prediction mode can be selected by subjecting the prediction error to the Hadamard transform and minimizing the mean absolute difference (MAD).
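The cost used when RDO is off can be sketched as follows; the recursive Sylvester construction of the Hadamard matrix is one common choice and is not a detail prescribed by the text, and the function name is hypothetical.

```python
import numpy as np

def hadamard_cost(orig, pred):
    """Sketch of the non-RDO mode-selection cost: the prediction error is
    passed through a Hadamard transform and the mean absolute difference
    (MAD) of the transformed error is returned.
    """
    n = orig.shape[0]                    # n must be a power of two
    h = np.array([[1]])
    while h.shape[0] < n:                # build H_n recursively
        h = np.block([[h, h], [h, -h]])
    err = orig.astype(np.int64) - pred.astype(np.int64)
    satd = np.abs(h @ err @ h.T)         # 2-D Hadamard of the error
    return satd.mean()

# the encoder would evaluate hadamard_cost(...) for each candidate intra
# mode and keep the mode with the smallest cost
```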
15, the prediction is carried out using the pixel values of the neighboring blocks, according to the corresponding angles. In DC mode, the higher prediction pixels and the left-most prediction pixels can be filtered to improve the prediction efficiency. Here, the filtering intensity can become higher for a smaller block. The other internal pixels in the current prediction block may not be filtered. Meanwhile, a flat mode can be used to reflect the directionality, rather than the DC mode. In the flat mode, a value of the Flat_Signal, between the information transmitted from the encoder to the decoder, is set to 1. When the flat mode is used, the DC mode is not used. Therefore, when the DC mode is used instead of the flat mode, the value of the Flat_Signal is set to 0. When the flat mode is used, the same prediction procedures can be used, as described above in FIGS. 6 to 10. Here, the decoder can perform an RDO operation, described above, in order to select the optimal procedure. If necessary, two or more procedures may be used together between the above procedures. The encoder signals to the decoder information about which procedure selects the encoder from the prediction procedures in the flat mode illustrated in FIGS. 6 to 10 With respect to a reference pixel of a chroma component, a unified directional intra-prediction (UDI) of a luma block can be used, since it is in a modality number 4, what is called a DM modality. In a mode number 0, a prediction block is generated using a linear relationship between luma and chroma, which is called a linear model (LM) mode. A modality number 1 is a vertical modality, in which the prediction is carried out in the vertical direction, and 5 10 fifteen twenty 25 30 corresponds to the number 0 of luma modality. A modality number 2 is a horizontal lhea, in which the prediction is carried out in the horizontal direction, and corresponds to the luma modality number 1. A modality number 3 is a DC modality, in which a prediction block is generated using an average chroma value of a current prediction target block, and corresponds to the luma modality number 2. Returning to FIG. 14, the encoder encodes a prediction mode of the current block (S1440). The encoder encodes a prediction mode for a luma component block and a chroma component block of the current prediction block. Here, since the prediction mode of the current prediction destination block is highly correlated with a prediction mode of a neighboring block, the current prediction destination block is encoded using the prediction mode of the neighboring block, reducing by it the amount of bits. In addition, a more probable mode (MPM) of the current prediction destination block is determined and, consequently, the prediction mode of the current prediction destination block can be encoded using the MPM. Subsequently, a pfxel value of the current prediction block and a differential value, in one pfxel, are obtained for the pfxel value of the prediction block, thereby generating a residual signal (S1450). The generated residual signal is transformed and encoded (S1460). The residual signal can be encoded using a transformation core, in which the transformation coding core has a size of 2 x 2, 4 x 4, 8 x 8, 16 x 16, 32 x 32 or 64 x 64. A transformation coefficient C is generated for the transformation, which can be a two-dimensional block of transformation coefficients. For example, for a block of nxn, a transformation coefficient can be calculated by Equation 20. 
When m = hN, n = 2N and h = 1/2, the transform coefficients C for an m x n or n x m differential block can be obtained in two ways. First, the m x n or n x m differential block is split into four m x m blocks and a transform kernel is applied to each block, thereby generating the transform coefficients. Alternatively, a transform kernel is applied to the m x n or n x m differential block as a whole, thereby generating the transform coefficients.

The encoder determines which of the residual signal and the transform coefficients to transmit (S1470). For example, when prediction has been carried out well, the residual signal can be transmitted as it is, without transform coding. The determination of which of the residual signal and the transform coefficients to transmit can be carried out by RDO or the like, comparing the cost functions before and after transform coding so as to minimize the cost. When the type of signal to transmit, that is, the residual signal or the transform coefficients, has been determined for the current prediction block, the type of the transmitted signal is also signaled to the decoder.

Subsequently, the encoder scans the transform coefficients (S1480). A quantized two-dimensional block of transform coefficients can be converted into a one-dimensional vector of transform coefficients by scanning.

The scanned coefficients and the intra-prediction mode are entropy-encoded (S1490). The encoded information is formed into a compressed bitstream, which can be transmitted or stored through an NAL.

FIG. 16 is a flow chart schematically illustrating an operation of the decoder in the system according to the present invention.

Referring to FIG. 16, the decoder entropy-decodes a received bitstream (S1610). Here, a block type can be obtained from a variable-length coding (VLC) table, and the prediction mode of the current decoding target block can be obtained. When the received bitstream includes side information necessary for decoding, such as information on coding units, prediction units and transform units, information on AIS filtering, information on a limit on the number of prediction modes, information on unused prediction modes, information on rearrangement of prediction modes, information on transform procedures and information on scanning procedures, the side information is entropy-decoded along with the bitstream. The decoded information can confirm whether the signal transmitted for the current decoding target block is a residual signal or transform coefficients for the differential block. A residual signal, or a one-dimensional vector of transform coefficients for the differential block, is obtained for the current decoding target block.

Subsequently, the decoder generates a residual block (S1620).

The decoder inversely scans the entropy-decoded residual signal or transform coefficients to generate a two-dimensional block. Here, a residual block can be generated from the residual signal, and a two-dimensional block of transform coefficients can be generated from the transform coefficients.
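As an illustration of the inverse scan in S1620, the following minimal sketch rebuilds the two-dimensional coefficient block from the one-dimensional vector. The zigzag order is an assumption; the signaled scanning procedure itself is not reproduced in this description.

```python
import numpy as np

def zigzag_positions(n: int):
    """(row, col) positions of an n x n block in zigzag scan order.

    Assumed scan for illustration only; the actual scan is signaled.
    """
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def inverse_scan(coeff_vector, n: int) -> np.ndarray:
    """Rebuild the 2-D coefficient block from the 1-D scanned vector (S1620)."""
    block = np.zeros((n, n), dtype=np.int64)
    for value, (r, c) in zip(coeff_vector, zigzag_positions(n)):
        block[r, c] = value
    return block
```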
The transform coefficients are inversely quantized, the inversely quantized transform coefficients are inversely transformed, and the residual block for the residual signal is generated by the inverse transform. The inverse transform of an n x n block can be expressed by Equation 11.

The decoder generates reference pixels (S1630). Here, the decoder generates the reference pixels with reference to the information, signaled and transmitted by the encoder, on whether AIS filtering has been applied and on the type of filter used. In the same way as in the encoding process, the pixels in the rightmost vertical line of the left block, already decoded and reconstructed, neighboring the current decoding target block, and the pixels in the lowest horizontal line of the upper block neighboring the decoding target block, are used to generate the reference pixels.

Meanwhile, when the CIP_Signal value received by the decoder is set to 1, which means that the encoder uses CIP for the target picture, the decoder generates the reference pixels accordingly. For example, only the pixels of neighboring blocks coded in the intra-prediction mode are used as reference pixels, while the pixels of neighboring blocks coded in the inter-prediction mode are not used as reference pixels. In this case, as illustrated in FIG. 6, the pixels (target prediction samples) corresponding to the pixel positions of the neighboring blocks coded in the inter-prediction mode are generated as reference pixels by interpolating the neighboring reference pixels coded in the intra-prediction mode, or the neighboring reference pixels coded in the intra-prediction mode can be copied and used as the reference pixels corresponding to the pixel positions of the neighboring blocks coded in the inter-prediction mode.

For example, when intra-prediction-mode pixels are present on both the right and left sides, as well as on the upper and lower sides, of an inter-prediction target sample, the target prediction sample PT located in a block predicted in the inter-prediction mode is obtained by Equation 17. When an intra-predicted sample is present on only one side of the target prediction sample, the target prediction sample PT located at a block position predicted in the inter-prediction mode can be obtained by Equation 18. In Equation 17 and/or Equation 18, the mean values of the corresponding intra-prediction-mode pixels can be used as the PLB or PRA values. If no neighboring block is predicted in the intra-prediction mode, a pixel value at the same position in a previous picture can be copied for use as a reference pixel value.

When the encoder uses AIS filtering, that is, when smoothing is applied and therefore AIS is on, the decoder also performs AIS filtering when generating the reference pixels, according to the reference pixel generation procedure used by the encoder. The decoder can determine the filter coefficients based on the filter type information among the received information. For example, when there are two sets of filter coefficients, [1, 2, 1] and [1, 1, 4, 1, 1], the set indicated by the filter type information can be used.
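A minimal sketch of such reference-pixel smoothing follows, applying the signaled filter along a line of reference pixels. Leaving the endpoints unfiltered is an assumption, since the description does not specify boundary handling.

```python
def ais_filter(ref, coeffs=(1, 2, 1)):
    """Smooth a line of reference pixels with the signaled filter.

    `coeffs` is the signaled filter, e.g. (1, 2, 1) or (1, 1, 4, 1, 1).
    The endpoints are copied unfiltered, which is an assumption; the
    description does not specify boundary handling.
    """
    half = len(coeffs) // 2
    norm = sum(coeffs)
    out = list(ref)
    for i in range(half, len(ref) - half):
        acc = sum(c * ref[i - half + j] for j, c in enumerate(coeffs))
        out[i] = (acc + norm // 2) // norm  # normalize with integer rounding
    return out
```

For example, ais_filter([10, 20, 40, 20, 10]) smooths the interior samples while keeping the first and last reference pixels unchanged.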
Next, a prediction block is generated for the decoding target block, using the reference pixels and the entropy-decoded prediction mode of the current decoding target block (S1640).

The process of generating the prediction block is the same as the process by which the encoder determines the prediction mode and generates the prediction block. When the prediction mode of the current block is a flat mode, the flat prediction procedure used to generate the prediction block can be identified by analyzing the signaled information. Here, the decoder can generate the prediction block, based on the identified information, according to the mode used among the flat modes illustrated in FIGS. 6 to 10.

Next, a reconstructed block is generated by adding, pixel by pixel, a pixel value of the prediction block and a pixel value of the differential block (S1670).
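The reconstruction in S1670 amounts to a pixel-wise addition, sketched below. Clipping to an 8-bit sample range is an assumption, since the description only specifies the addition.

```python
def reconstruct(pred, resid, bit_depth=8):
    """Pixel-wise addition of the prediction and differential blocks (S1670).

    Clipping to [0, 2**bit_depth - 1] is an assumption; the description
    only specifies the per-pixel addition.
    """
    lo, hi = 0, (1 << bit_depth) - 1
    return [[min(max(p + r, lo), hi) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(pred, resid)]
```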
Claims (3)

1. A method of decoding a video signal, comprising:

obtaining a residual coefficient of a current block from the video signal;

obtaining an inversely quantized residual coefficient by performing inverse quantization on the residual coefficient of the current block;

obtaining a residual sample by performing an inverse transform on the inversely quantized residual coefficient of the current block;

performing intra-prediction for the current block based on reference samples of the current block, the reference samples being obtained based on neighboring samples adjacent to the current block; and

obtaining a reconstruction sample relating to the current block by adding a prediction sample, obtained by performing the intra-prediction, and the residual sample relating to the current block,

wherein, when a sample unavailable for intra-prediction of the current block is present among the neighboring samples adjacent to the current block, the unavailable sample is replaced with a sample located on one side of the unavailable sample among the neighboring samples adjacent to the current block, and

wherein the sample located on said side of the unavailable sample is located on a lower side of the unavailable sample when the unavailable sample is a left neighboring sample adjacent to the current block, and the sample located on said side of the unavailable sample is located on a left side of the unavailable sample when the unavailable sample is an upper neighboring sample adjacent to the current block.

2. The method of claim 1, wherein, when an intra-prediction mode of the current block corresponds to a flat mode, the prediction sample is obtained by linear interpolation of the reference samples.

3. The method of claim 2, wherein the reference samples comprise at least one of a left neighboring sample whose y-axis coordinate is the same as that of the prediction sample, and an upper neighboring sample whose x-axis coordinate is the same as that of the prediction sample.
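Purely as an illustration of the substitution rule recited in claim 1 (a sketch, not part of the claims), the following replaces each unavailable left neighboring sample with the sample on its lower side and each unavailable upper neighboring sample with the sample on its left side. Handling of samples that remain unavailable at the very ends is an assumption.

```python
def substitute_unavailable(left, top):
    """Replace unavailable neighboring samples as recited in claim 1.

    `left` lists the left neighboring column from top to bottom and `top`
    the upper neighboring row from left to right; None marks a sample
    unavailable for intra-prediction. An unavailable left sample takes
    the value of the sample on its lower side; an unavailable upper
    sample takes the value of the sample on its left side. Samples with
    no available source are left as None (an assumption).
    """
    left, top = list(left), list(top)
    for i in range(len(left) - 2, -1, -1):  # bottom-to-top over the left column
        if left[i] is None and left[i + 1] is not None:
            left[i] = left[i + 1]
    for j in range(1, len(top)):            # left-to-right over the upper row
        if top[j] is None and top[j - 1] is not None:
            top[j] = top[j - 1]
    return left, top
```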